Recent works on language-guided image manipulation have shown the great power of language in providing rich semantics, especially for face images. However, another natural signal carried by language, motion, is much less explored. In this paper, we leverage motion information and study a novel task, language-guided face animation, which aims to animate a static face image with the help of language. To better utilize both the semantics and the motion in language, we propose a simple yet effective framework. Specifically, we propose a recurrent motion generator to extract a series of semantic and motion cues from the language and feed them, together with visual information, to a pre-trained StyleGAN to generate high-quality frames. To optimize the proposed framework, three carefully designed loss functions are introduced: a regularization loss to keep the face identity, a path-length regularization loss to ensure motion smoothness, and a contrastive loss to enable video synthesis under various language guidance within a single model. Extensive experiments with both qualitative and quantitative evaluations are conducted on different domains. Code will be available at https://github.com/tiankaihang/language-guided-animation.git.
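As a reading aid, the sketch below shows one way the three objectives above could be combined in PyTorch; the tensor shapes, loss weights, and margin value are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn.functional as F

def total_loss(frames, latents, id_embed_src, id_embed_gen, text_feat, neg_text_feat,
               w_id=1.0, w_path=0.5, w_con=0.1):
    """Hypothetical combination of the three losses sketched in the abstract."""
    # 1) Identity regularization: keep generated faces close to the source identity
    #    (embeddings could come from any off-the-shelf face recognition network).
    loss_id = 1.0 - F.cosine_similarity(id_embed_src, id_embed_gen, dim=-1).mean()

    # 2) Path-length regularization on consecutive latent codes [B, T, D] to
    #    encourage smooth motion between neighboring frames.
    loss_path = (latents[:, 1:] - latents[:, :-1]).pow(2).sum(dim=-1).mean()

    # 3) Contrastive loss: frames should match their own language guidance better
    #    than a mismatched (negative) description.
    frame_feat = frames.flatten(1)  # placeholder for a real frame/video encoder
    pos = F.cosine_similarity(frame_feat, text_feat, dim=-1)
    neg = F.cosine_similarity(frame_feat, neg_text_feat, dim=-1)
    loss_con = F.relu(0.2 + neg - pos).mean()  # margin-based form, one of several choices

    return w_id * loss_id + w_path * loss_path + w_con * loss_con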
Recently, graph neural networks have shown their strength in modeling the complex topological structures of network-based recommender systems. Due to the diverse interactions among nodes and the abundant semantics carried by various types of nodes and edges, there is a surge of research interest in learning expressive node representations in multiplex heterogeneous networks. One of the most important tasks in recommender systems is to predict the potential connection (i.e., relationship) between two nodes under a specific edge type. Although existing studies utilize explicit metapaths to aggregate neighbors, in practice they only consider intra-relationship metapaths and thus fail to exploit the potential uplift brought by inter-relationship information. Moreover, it is not always straightforward to fully exploit inter-relationship metapaths under various relationships, especially with growing numbers of node and edge types. In addition, the contributions of different relationships between two nodes are hard to measure. To address these challenges, we propose HybridGNN, an end-to-end GNN model with hybrid aggregation flows and hierarchical attention, to fully exploit the heterogeneity in multiplex scenarios. Specifically, HybridGNN applies a randomized inter-relationship exploration module to exploit the multiplexity property among different relationships. Then, our model utilizes hybrid aggregation flows under both intra-relationship metapaths and randomized exploration to learn rich semantics. To explore the importance of different aggregation flows and take advantage of the multiplexity property, we propose a novel hierarchical attention module that leverages both metapath-level attention and relationship-level attention. Extensive experimental results suggest that HybridGNN achieves the best performance compared with several state-of-the-art baselines.
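To make the two-level attention idea concrete, here is a minimal PyTorch sketch of a hierarchical attention module that first attends over metapath-specific embeddings within each relationship and then over relationships; the module structure and dimensions are assumptions, not HybridGNN's released code.

import torch
import torch.nn as nn

class HierarchicalAttention(nn.Module):
    """Illustrative two-level attention over metapaths, then relationships."""
    def __init__(self, dim):
        super().__init__()
        self.metapath_query = nn.Linear(dim, 1, bias=False)
        self.relation_query = nn.Linear(dim, 1, bias=False)

    def attend(self, query, embeds):
        # embeds: [N, K, D] -> softmax-weighted sum over the K candidates
        scores = torch.softmax(query(torch.tanh(embeds)), dim=1)  # [N, K, 1]
        return (scores * embeds).sum(dim=1)                       # [N, D]

    def forward(self, embeds_per_relation):
        # embeds_per_relation: list over relationships, each [N, K_r, D]
        relation_embeds = torch.stack(
            [self.attend(self.metapath_query, e) for e in embeds_per_relation], dim=1
        )  # [N, R, D]
        return self.attend(self.relation_query, relation_embeds)  # [N, D]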
This paper presents a novel trajectory planning method for aerial perching. Compared with existing work, the terminal state and the trajectory duration can be adjusted adaptively rather than being determined in advance. Moreover, our planner is able to minimize the tangential relative speed on the premise of safety and dynamic feasibility. This feature is especially valuable for micro aerial robots with low maneuverability or in scenarios where space is insufficient. In addition, we design a flexible transformation strategy to eliminate the terminal constraints and reduce the number of optimization variables. Furthermore, we take precise SE(3) motion planning into account to ensure that the drone does not touch the landing platform until the very last moment. The proposed method is validated onboard a palm-sized micro aerial robot with limited thrust and torque (thrust-to-weight ratio 1.7) perching on a moving inclined surface. Sufficient experimental results show that our planner generates an optimal trajectory within 20 ms and replans it with a warm start in 2 ms.
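The quantity the planner penalizes at touchdown can be illustrated with simple vector geometry; the NumPy sketch below only shows how a tangential relative speed might be computed against a tilted landing surface and is not the proposed optimizer.

import numpy as np

def tangential_relative_speed(v_drone, v_platform, surface_normal):
    """Tangential component of the drone-platform relative velocity at touchdown."""
    n = surface_normal / np.linalg.norm(surface_normal)
    v_rel = np.asarray(v_drone, dtype=float) - np.asarray(v_platform, dtype=float)
    v_normal = np.dot(v_rel, n) * n      # component along the surface normal
    v_tangential = v_rel - v_normal      # component sliding along the surface
    return np.linalg.norm(v_tangential)

# Example: perching on a 30-degree inclined surface moving at 0.5 m/s
normal = np.array([np.sin(np.radians(30.0)), 0.0, np.cos(np.radians(30.0))])
print(tangential_relative_speed([0.2, 0.0, -1.0], [0.5, 0.0, 0.0], normal))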
We study joint video and language (VL) pre-training to enable cross-modality learning and benefit plentiful downstream VL tasks. Existing works either extract low-quality video features or learn limited text embeddings, while neglecting that high-resolution videos and diversified semantics can significantly improve cross-modality learning. In this paper, we propose a novel High-resolution and Diversified VIdeo-LAnguage pre-training model (HD-VILA) for many visual tasks. In particular, we collect a large dataset with two distinct properties: 1) the first high-resolution dataset, including 371.5K hours of 720p videos, and 2) the most diversified dataset, covering 15 popular YouTube categories. To enable VL pre-training, we jointly optimize the HD-VILA model with a hybrid Transformer that learns rich spatiotemporal features and a multimodal Transformer that enforces interactions between the learned video features and diversified texts. Our pre-training model achieves new state-of-the-art results on 10 VL understanding tasks and 2 novel text-to-visual generation tasks. For example, we outperform SOTA models with a relative increase of 38.5% in R@1 on zero-shot MSR-VTT text-to-video retrieval and 53.6% on the high-resolution dataset LSMDC. The learned VL embeddings are also effective in generating visually pleasing and semantically relevant results on text-to-visual manipulation and super-resolution tasks.
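The two-stage design described above (a spatiotemporal video encoder followed by a multimodal fusion Transformer) can be caricatured in a few lines of PyTorch; the layer counts, dimensions, and token layout below are arbitrary assumptions and not the HD-VILA architecture.

import torch
import torch.nn as nn

class ToyVideoLanguageModel(nn.Module):
    """Schematic: encode video tokens, then fuse them with text tokens."""
    def __init__(self, dim=256, n_heads=4, n_layers=2):
        super().__init__()
        self.video_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, n_heads, batch_first=True), n_layers)
        self.fusion = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(dim, n_heads, batch_first=True), n_layers)

    def forward(self, video_tokens, text_tokens):
        # video_tokens: [B, T*P, D] flattened spatiotemporal patches
        # text_tokens:  [B, L, D]   embedded text
        v = self.video_encoder(video_tokens)
        return self.fusion(torch.cat([v, text_tokens], dim=1))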
This letter presents a complete rendezvous-based framework for multi-robot exploration under communication constraints. Considering that communication is limited in both bandwidth and range in the real world, we propose a lightweight environment representation and an efficient cooperative exploration strategy. To cope with low bandwidth, each robot maintains free space with specific polytopes and keeps super frontier information (SFI) as the source of its exploration decisions. To reduce repeated exploration, we develop a task-based protocol that drives the robots to share the collected information at stable rendezvous. We also design complete path-planning schemes for both the centralized and the decentralized case. To validate that our framework is practical and generic, we present extensive benchmarks and deploy our system on multi-UGV and multi-UAV platforms.
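Purely as an illustration of the bandwidth argument, the sketch below shows what a compact rendezvous payload of polytopes and super frontiers could look like; the field names and merge rule are hypothetical and not the letter's protocol.

from dataclasses import dataclass, field

@dataclass
class SuperFrontierMessage:
    """Hypothetical compact payload exchanged at a rendezvous instead of a full map."""
    robot_id: int
    free_space_polytopes: list = field(default_factory=list)  # list of vertex arrays
    super_frontiers: list = field(default_factory=list)       # list of (x, y, z) centroids
    visited_frontier_ids: set = field(default_factory=set)    # helps avoid repeated exploration

def merge_messages(local: SuperFrontierMessage, remote: SuperFrontierMessage):
    """Union the shared information so both robots plan on the same map summary."""
    local.free_space_polytopes.extend(remote.free_space_polytopes)
    local.super_frontiers.extend(remote.super_frontiers)
    local.visited_frontier_ids |= remote.visited_frontier_ids
    return local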
Algorithmic fairness is becoming increasingly important in data mining and machine learning. Among others, a foundational notion is group fairness. The vast majority of the existing works on group fairness, with a few exceptions, primarily focus on debiasing with respect to a single sensitive attribute, despite the fact that the co-existence of multiple sensitive attributes (e.g., gender, race, marital status, etc.) in the real world is commonplace. As such, methods that can ensure a fair learning outcome with respect to all sensitive attributes of concern simultaneously need to be developed. In this paper, we study the problem of information-theoretic intersectional fairness (InfoFair), where statistical parity, a representative group fairness measure, is guaranteed among demographic groups formed by multiple sensitive attributes of interest. We formulate it as a mutual information minimization problem and propose a generic end-to-end algorithmic framework to solve it. The key idea is to leverage a variational representation of mutual information, which considers the variational distribution between learning outcomes and sensitive attributes, as well as the density ratio between the variational and the original distributions. Our proposed framework is generalizable to many different settings, including other statistical notions of fairness, and could handle any type of learning task equipped with a gradient-based optimizer. Empirical evaluations in the fair classification task on three real-world datasets demonstrate that our proposed framework can effectively debias the classification results with minimal impact on the classification accuracy.
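For reference, one standard way to write such a variational representation of the mutual information between predictions \hat{y} and sensitive attributes s is the identity below, where q denotes the variational distribution; the exact surrogate optimized by InfoFair may differ.

\[
I(\hat{y}; s) \;=\; \mathbb{E}_{p(\hat{y}, s)}\!\left[\log \frac{q(s \mid \hat{y})}{p(s)}\right]
\;+\; \mathbb{E}_{p(\hat{y})}\!\left[\mathrm{KL}\!\left(p(s \mid \hat{y}) \,\|\, q(s \mid \hat{y})\right)\right].
\]

The first term couples the learning outcome with the sensitive attributes through the variational distribution, while the second term is the correction given by the density ratio between the original conditional p(s \mid \hat{y}) and the variational q(s \mid \hat{y}); driving a tractable bound on the left-hand side toward zero pushes the predictions toward statistical parity.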
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
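A NAIVEATTACK-flavored poisoning step can be sketched in a few lines: stamp a trigger onto a fraction of the raw images and relabel them before distillation runs. The trigger pattern, patch size, and poisoning rate below are illustrative assumptions, not the paper's exact configuration.

import torch

def add_patch_trigger(images, labels, target_label, poison_frac=0.1, patch_size=4):
    """Stamp a white square onto a random subset of raw images and flip their labels."""
    images = images.clone()
    labels = labels.clone()
    n_poison = int(poison_frac * images.size(0))
    idx = torch.randperm(images.size(0))[:n_poison]
    images[idx, :, -patch_size:, -patch_size:] = 1.0  # bottom-right white patch
    labels[idx] = target_label
    return images, labels

# images: [N, C, H, W] in [0, 1]; the distillation pipeline then runs unchanged on the
# poisoned pool, which is what distinguishes this setting from attacks performed at
# model-training time (DOORPING would instead keep updating the trigger during distillation).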
Blind image quality assessment (BIQA) remains challenging due to the diversity of distortions and the variation of image content, which complicate the distortion patterns across different scales and aggravate the difficulty of the regression problem for BIQA. However, existing BIQA methods often fail to consider multi-scale distortion patterns and image content, and little research has been done on learning strategies that make the regression model perform better. In this paper, we propose a simple yet effective Progressive Multi-Task Image Quality Assessment (PMT-IQA) model, which contains a multi-scale feature extraction module (MS) and a progressive multi-task learning module (PMT), to help the model learn complex distortion patterns and better optimize the regression task in a way that follows the human learning process from easy to hard. To verify the effectiveness of the proposed PMT-IQA model, we conduct experiments on four widely used public datasets. The experimental results indicate that the performance of PMT-IQA is superior to that of the comparison approaches, and that both the MS and PMT modules improve the model's performance.
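To visualize how the two modules could fit together, here is a toy PyTorch model that pools features at several scales and attaches an easier coarse classification head alongside a harder score regression head, whose losses could be re-weighted from easy to hard during training; all architectural details are assumptions, not the PMT-IQA implementation.

import torch
import torch.nn as nn

class ToyPMTIQA(nn.Module):
    """Schematic: multi-scale pooled features feed a coarse head and a fine head."""
    def __init__(self, in_ch=3, dim=32, n_levels=5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, padding=1), nn.ReLU())
        self.pools = [nn.AdaptiveAvgPool2d(s) for s in (1, 2, 4)]  # multi-scale pooling
        feat_dim = dim * (1 + 4 + 16)
        self.classify = nn.Linear(feat_dim, n_levels)  # easier, coarse quality-level task
        self.regress = nn.Linear(feat_dim, 1)          # harder, fine-grained score task

    def forward(self, x):
        f = self.backbone(x)
        ms = torch.cat([p(f).flatten(1) for p in self.pools], dim=1)
        return self.classify(ms), self.regress(ms).squeeze(-1)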
The development of social media user stance detection and bot detection methods relies heavily on large-scale and high-quality benchmarks. However, in addition to low annotation quality, existing benchmarks generally have incomplete user relationships, hindering graph-based account detection research. To address these issues, we propose a Multi-Relational Graph-Based Twitter Account Detection Benchmark (MGTAB), the first standardized graph-based benchmark for account detection. To our knowledge, MGTAB is built on the largest collection of original data in the field, with over 1.55 million users and 130 million tweets. MGTAB contains 10,199 expert-annotated users and 7 types of relationships, ensuring high-quality annotation and diversified relations. In MGTAB, we extracted the 20 user property features with the greatest information gain, together with user tweet features, as the user features. In addition, we performed a thorough evaluation of MGTAB and other public datasets. Our experiments found that graph-based approaches are generally more effective than feature-based approaches and perform better when multiple relations are introduced. By analyzing the experimental results, we identify effective approaches for account detection and provide potential future research directions in this field. Our benchmark and standardized evaluation procedures are freely available at: https://github.com/GraphDetec/MGTAB.
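The finding that introducing multiple relations helps can be illustrated with a generic multi-relational aggregation layer; the RGCN-style sketch below uses one weight matrix per relation type and is not a model shipped with MGTAB.

import torch
import torch.nn as nn

class SimpleRelationalLayer(nn.Module):
    """Generic multi-relational graph layer: per-relation transforms, summed."""
    def __init__(self, in_dim, out_dim, n_relations):
        super().__init__()
        self.rel_weights = nn.ModuleList(
            [nn.Linear(in_dim, out_dim, bias=False) for _ in range(n_relations)])
        self.self_loop = nn.Linear(in_dim, out_dim)

    def forward(self, x, adjs):
        # x: [N, in_dim]; adjs: list of row-normalized [N, N] adjacency matrices,
        # one per relation (e.g., follower, friend, mention, reply, ...).
        out = self.self_loop(x)
        for adj, lin in zip(adjs, self.rel_weights):
            out = out + adj @ lin(x)
        return torch.relu(out)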
Given the increasingly intricate forms of partial differential equations (PDEs) in physics and related fields, computationally solving PDEs without analytic solutions inevitably suffers from the trade-off between accuracy and efficiency. Recent advances in neural operators, a family of mesh-independent neural-network-based PDE solvers, have suggested the dawn of overcoming this challenge. In this emerging direction, the Koopman neural operator (KNO) is a representative demonstration and outperforms other state-of-the-art alternatives in terms of accuracy and efficiency. Here we present KoopmanLab, a self-contained and user-friendly PyTorch module of the Koopman neural operator family for solving partial differential equations. Beyond the original version of KNO, we develop multiple new variants of KNO based on different neural network architectures to improve the general applicability of our module. These variants are validated by mesh-independent and long-term prediction experiments implemented on representative PDEs (e.g., the Navier-Stokes equation and the Bateman-Burgers equation) and ERA5 (i.e., one of the largest high-resolution data sets of global-scale climate fields). These demonstrations suggest the potential of KoopmanLab to be considered in diverse applications of partial differential equations.
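To give a flavor of the operator-learning idea behind such solvers, the sketch below implements a generic Fourier-domain layer (transform to frequency space, apply a learned linear map to the lowest modes, transform back), in the spirit of Koopman/Fourier neural operators; it does not reproduce KoopmanLab's API or architecture.

import torch
import torch.nn as nn

class SpectralOperator1d(nn.Module):
    """Generic spectral operator layer acting on the lowest Fourier modes."""
    def __init__(self, channels, n_modes):
        super().__init__()
        self.n_modes = n_modes
        scale = 1.0 / channels
        self.weight = nn.Parameter(
            scale * torch.randn(channels, channels, n_modes, dtype=torch.cfloat))

    def forward(self, u):
        # u: [batch, channels, grid]; n_modes must not exceed grid // 2 + 1
        u_hat = torch.fft.rfft(u)                         # to frequency space
        out_hat = torch.zeros_like(u_hat)
        out_hat[:, :, :self.n_modes] = torch.einsum(
            "bcm,com->bom", u_hat[:, :, :self.n_modes], self.weight)
        return torch.fft.irfft(out_hat, n=u.size(-1))     # back to physical space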